129 research outputs found

    Spotted Lanternflies: Their Harm and How to Help

    Get PDF

    Constraining the Size Growth of the Task Space with Socially Guided Intrinsic Motivation using Demonstrations

    Get PDF
    This paper presents an algorithm for learning a highly redundant inverse model in continuous and non-preset environments. Our Socially Guided Intrinsic Motivation by Demonstrations (SGIM-D) algorithm combines the advantages of both social learning and intrinsic motivation to specialise in a wide range of skills while lessening its dependence on the teacher. SGIM-D is evaluated on a fishing skill learning experiment. Comment: IJCAI Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Barcelona, Spain (2011)

    Whom Will an Intrinsically Motivated Robot Learner Choose to Imitate from?

    Get PDF
    This paper studies an interactive learning system that couples internally guided learning and social interaction in the case where it can interact with several teachers. Socially Guided Intrinsic Motivation with Interactive learning at the Meta level (SGIM-IM) is an algorithm for robot learning of motor skills in high-dimensional, continuous and non-preset environments, with two levels of active learning: at the meta level, SGIM-IM actively decides when and whom to ask for help; in autonomous exploration, it actively chooses its goals. We illustrate through an air hockey game that SGIM-IM efficiently chooses the best strategy.

    Interactive learning gives the tempo to an intrinsically motivated robot learner

    Get PDF
    This paper studies an interactive learning system that couples internally guided learning and social interaction for robot learning of motor skills. We present Socially Guided Intrinsic Motivation with Interactive learning at the Meta level (SGIM-IM), an algorithm for learning forward and inverse models in high-dimensional, continuous and non-preset environments. The robot actively self-determines: at a meta level, a strategy, i.e. whether to choose autonomous learning or social learning; and at the task level, a goal task in autonomous exploration. We illustrate through two experimental set-ups that SGIM-IM efficiently combines the advantages of social learning and intrinsic motivation to produce a wide range of effects in the environment and develop precise control policies in large spaces, while minimising its reliance on the teacher and offering a flexible interaction framework with humans.
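    The meta-level choice described above (deciding between autonomous exploration and asking a teacher based on which strategy currently yields more learning progress) can be sketched as follows. This is an illustrative stand-in, not the paper's actual algorithm: the `StrategySelector` class, the windowed progress heuristic, and the epsilon-greedy rule are all assumptions for the sketch.

    ```python
    import random

    class StrategySelector:
        """Pick a learning strategy by its recent learning progress."""

        def __init__(self, strategies=("autonomous", "social"), window=5):
            self.progress = {s: [] for s in strategies}
            self.window = window  # how many recent outcomes to average

        def record(self, strategy, error_before, error_after):
            # Learning progress = reduction in task error after one
            # episode using the given strategy (intrinsic-motivation signal).
            self.progress[strategy].append(error_before - error_after)

        def choose(self, epsilon=0.2):
            # Epsilon-greedy over the mean recent progress of each strategy.
            if random.random() < epsilon:
                return random.choice(list(self.progress))

            def mean_recent(s):
                hist = self.progress[s][-self.window:]
                return sum(hist) / len(hist) if hist else 0.0

            return max(self.progress, key=mean_recent)

    selector = StrategySelector()
    selector.record("autonomous", 1.0, 0.9)  # small error reduction
    selector.record("social", 1.0, 0.5)      # larger error reduction
    print(selector.choose(epsilon=0.0))      # → social
    ```

    The key property mirrored here is that reliance on the teacher falls away on its own: once social episodes stop improving the model faster than autonomous ones, the greedy choice shifts back to autonomous exploration.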

    Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis

    Full text link
    Open-ended learning benefits immensely from the use of symbolic methods for goal representation, as they offer ways to structure knowledge for efficient and transferable learning. However, existing Hierarchical Reinforcement Learning (HRL) approaches relying on symbolic reasoning are often limited as they require a manual goal representation. The challenge in autonomously discovering a symbolic goal representation is that it must preserve critical information, such as the environment dynamics. In this paper, we propose a developmental mechanism for goal discovery via an emergent representation that abstracts (i.e., groups together) sets of environment states that have similar roles in the task. We introduce a Feudal HRL algorithm that concurrently learns both the goal representation and a hierarchical policy. The algorithm uses symbolic reachability analysis for neural networks to approximate the transition relation among sets of states and to refine the goal representation. We evaluate our approach on complex navigation tasks, showing the learned representation is interpretable, transferable, and results in data-efficient learning.
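    The core idea of refining an abstraction so that grouped states keep similar roles in the dynamics can be illustrated on a toy deterministic system. The sketch below is not the paper's algorithm (which uses reachability analysis over neural networks): it is a crude partition-refinement stand-in where each block of states is split by the signature of abstract blocks its members can reach in one step, and the `refine` function, the chain environment, and the goal/non-goal initial split are all assumptions for illustration.

    ```python
    def refine(transitions, blocks):
        """Split each abstract block by which blocks its states reach."""
        def block_of(s):
            return next(i for i, b in enumerate(blocks) if s in b)

        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                # Signature: set of abstract blocks reachable in one step.
                sig = frozenset(block_of(t) for t in transitions[s])
                groups.setdefault(sig, set()).add(s)
            new_blocks.extend(groups.values())
        return new_blocks

    # A 4-state chain: 0 -> 1 -> 2 -> 3 -> 3 (state 3 is an absorbing goal).
    transitions = {0: {1}, 1: {2}, 2: {3}, 3: {3}}
    blocks = [{0, 1, 2}, {3}]  # initial split: goal vs. non-goal states
    blocks = refine(transitions, blocks)
    print(sorted(map(sorted, blocks)))  # → [[0, 1], [2], [3]]
    ```

    One refinement step separates state 2 from states 0 and 1, since only 2 reaches the goal block directly: states end up grouped exactly when they play the same role with respect to reaching the goal, which is the property the emergent goal representation is meant to preserve.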